In medical image analysis, automated segmentation of multi-component anatomical structures, which often exhibit a spectrum of anomalies and pathologies, is a challenging task. In this work, we develop a multi-step approach using U-Net-based neural networks to first detect anomalies (bone marrow lesions, bone cysts) in the distal femur, proximal tibia and patella from 3D magnetic resonance (MR) images of the knee in individuals with varying grades of osteoarthritis. The extracted data are then used for downstream tasks involving semantic segmentation of individual bone and cartilage volumes as well as bone anomalies. For anomaly detection, U-Net-based models were developed to reconstruct the bone profiles of the femur and tibia via inpainting, so that anomalous bone regions could be replaced with close-to-normal appearances; the reconstruction error was then used to detect bone anomalies. A second, anomaly-aware network, which was compared to anomaly-naïve segmentation networks, provided the final automated segmentation of the femoral, tibial and patellar bones and cartilages from knee MR images containing a spectrum of bone anomalies. The anomaly-aware segmentation approach reduced Hausdorff distances for bone segmentations by up to 58% compared to the anomaly-naïve segmentation networks. In addition, the anomaly-aware networks detected bone lesions in the MR images with greater sensitivity and specificity (area under the receiver operating characteristic curve [AUC] up to 0.896) than the anomaly-naïve segmentation networks (AUC up to 0.874).
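As a loose illustration of the inpainting-based detection step, the sketch below masks the bone region, reconstructs it with a trained inpainting network, and thresholds the voxelwise reconstruction error; `inpaint_model` and the threshold value are hypothetical stand-ins, not the paper's exact configuration.

```python
import numpy as np

def detect_anomalies(volume: np.ndarray,
                     bone_mask: np.ndarray,
                     inpaint_model,
                     threshold: float = 0.15) -> np.ndarray:
    """Flag voxels whose inpainting reconstruction error is high."""
    # Mask out the bone so the network must reconstruct a close-to-normal
    # bone profile from the surrounding anatomical context.
    masked = volume.copy()
    masked[bone_mask] = 0.0
    reconstruction = inpaint_model(masked)
    # Voxels deviating strongly from the reconstructed "normal" appearance
    # are candidate anomalies (e.g. lesions, cysts) within the bone.
    error = np.abs(volume - reconstruction)
    return (error > threshold) & bone_mask
```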
In medical image analysis, the subtle visual characteristics of many diseases are challenging to discover, especially because of the lack of paired data. For example, in mild Alzheimer's disease (AD), brain tissue atrophy is difficult to observe from imaging data alone, especially without paired AD and cognitively normal (CN) data for comparison. This work presents Disease Discovery GAN (DiDiGAN), a weakly-supervised style-based framework for discovering and visualising subtle disease features. DiDiGAN learns a disease manifold of AD and CN visual characteristics, and style codes sampled from this manifold are imposed onto an anatomical structure "blueprint" to synthesise paired AD and CN magnetic resonance images (MRIs). To suppress non-disease-related variations between the generated AD and CN pairs, DiDiGAN leverages structural constraints with cycle consistency and anti-aliasing to enforce anatomical correspondence. When tested on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, DiDiGAN revealed key AD characteristics (reduced hippocampal volume, ventricular enlargement and atrophy of cortical structures) through the synthesised paired AD and CN scans. The qualitative results were supported by automated brain volume analysis, in which systematic pairwise reductions in brain tissue structures were also measured.
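The paired-synthesis idea can be pictured as follows: sample an AD and a CN style code from the learned disease manifold and apply both to the same anatomical blueprint, so the resulting pair differs only in disease appearance. `disease_manifold` and `generator` are hypothetical stand-ins for DiDiGAN's components, not its actual interfaces.

```python
import torch

def synthesise_pair(blueprint: torch.Tensor, disease_manifold, generator):
    """Generate an anatomically matched AD/CN MRI pair from one blueprint."""
    style_ad = disease_manifold.sample(label="AD")  # disease style code
    style_cn = disease_manifold.sample(label="CN")  # control style code
    # Same anatomy, two style codes -> the pair differs only in disease features.
    mri_ad = generator(blueprint, style_ad)
    mri_cn = generator(blueprint, style_cn)
    return mri_ad, mri_cn
```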
Data scarcity is common in deep learning models for medical image segmentation. Previous works have proposed multi-dataset learning, either simultaneously or via transfer learning, to expand training sets. However, medical image datasets contain images and features of diverse sizes, and developing one model simultaneously for multiple datasets is challenging. This work proposes the Fabric Image Representation Encoding Network (FIRENet), a universal architecture for simultaneous multi-dataset segmentation and for transfer learning involving an arbitrary number of datasets. To handle images and features of different sizes, a 3D fabric module is used to encapsulate many multi-scale sub-architectures; an optimal combination of these sub-architectures can be implicitly learnt to best suit the target dataset(s). For diverse-scale feature extraction, a 3D extension of atrous spatial pyramid pooling (ASPP3D) is used in each fabric node to provide fine-grained coverage of rich-scale image features. In the first experiment, FIRENet performed 3D universal bone segmentation across multiple musculoskeletal datasets of the human knee, shoulder and hip joints, exhibiting excellent simultaneous multi-dataset segmentation performance. When tested for transfer learning, FIRENet further exhibited excellent single-dataset performance (when pre-trained on a prostate dataset), as well as significantly improved universal bone segmentation performance. The second experiment involved the simultaneous segmentation of all 10 Medical Segmentation Decathlon (MSD) challenge datasets, where FIRENet demonstrated good multi-dataset segmentation results and adaptability to highly diverse image sizes across datasets. In both experiments, FIRENet streamlined multi-dataset learning with one unified network that required no hyper-parameter tuning.
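A minimal PyTorch sketch of an ASPP3D block is given below; the dilation rates and normalisation choices are assumptions for illustration, and FIRENet's exact fabric wiring is not reproduced.

```python
import torch
import torch.nn as nn

class ASPP3D(nn.Module):
    """Parallel dilated 3D convolutions for multi-scale feature extraction."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r),
                nn.BatchNorm3d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        # 1x1x1 convolution fuses the concatenated multi-scale responses.
        self.project = nn.Conv3d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```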
With the rise of high-resolution remote sensing technologies, there has been an explosion in the amount of data available for forest monitoring, and an accompanying growth in artificial intelligence applications that automatically derive forest properties of interest from these datasets. Many studies use their own data at small spatio-temporal scales and demonstrate an application of an existing or adapted data science method for a particular task. This approach often involves intensive and time-consuming data collection and processing, but generates results restricted to specific ecosystems and sensor types. There is a lack of widespread acknowledgement of how the types and structures of the data used affect the performance and accuracy of analysis algorithms. To accelerate progress in the field, benchmarking datasets upon which methods can be tested and compared are sorely needed. Here, we discuss how a lack of standardisation undermines confidence in the estimation of key forest properties, and how considerations of data collection need to be accounted for when assessing method performance. We present pragmatic requirements and considerations for the creation of rigorous, useful benchmarking datasets for forest monitoring applications, and discuss how tools from modern data science can improve the use of existing data. We list a set of example large-scale datasets that could contribute to benchmarking, and present a vision for how community-driven, representative benchmarking initiatives could benefit the field.
National research evaluation initiatives and incentive schemes have previously chosen between simplistic quantitative indicators and time-consuming peer review, sometimes supported by bibliometrics. Here we assess whether artificial intelligence (AI) could provide a third alternative, estimating article quality from multiple bibliometric and metadata inputs. We investigated this using provisional three-level REF2021 peer review scores for 84,966 articles submitted to the UK Research Excellence Framework 2021, each matching a Scopus record from 2014-18 and having a substantial abstract. We found that accuracy was highest in the medical and physical sciences Units of Assessment (UoAs) and economics, reaching 42% above the baseline (72% overall) in the best case, based on 1,000 bibliometric inputs and with half of the articles in each UoA used for training. Prediction accuracies above the baseline for the social science, mathematics, engineering, arts and humanities UoAs were much lower or close to zero. The Random Forest Classifier (standard or ordinal) and Extreme Gradient Boosting Classifier algorithms performed best of the 32 tested. Accuracy was lower if UoAs were merged or replaced by Scopus broad categories. Accuracy could be increased with an active learning strategy and by selecting only articles with higher prediction probabilities, as estimated by the algorithms, but this substantially reduced the number of scores predicted.
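The core prediction setup can be approximated with scikit-learn as below; the synthetic features, three-level score coding and probability cut-off are placeholders for exposition, not the study's actual data or tuned values.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))    # stand-in bibliometric/metadata inputs
y = rng.integers(0, 3, size=1000)  # stand-in three-level quality scores

# Half of the articles are used for training, mirroring the per-UoA split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.5, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

# Keeping only high-confidence predictions trades coverage for accuracy,
# analogous to the prediction-probability selection described above.
proba = clf.predict_proba(X_te)
confident = proba.max(axis=1) > 0.5
preds = clf.predict(X_te)
print(accuracy_score(y_te[confident], preds[confident]))
```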
The generalizability of time series forecasting models depends on the quality of model selection. Temporal cross-validation (TCV) is a standard technique for model selection in forecasting tasks: it sequentially partitions the training time series into train and validation windows, and performs hyperparameter optimization (HPO) of the forecast model to select the model with the best validation performance. However, model selection with TCV often leads to poor test performance when the test data distribution differs from that of the validation data. We propose a novel model selection method, H-Pro, which exploits the data hierarchy often associated with a time series dataset. Generally, the aggregated data at the higher levels of the hierarchy show better predictability and more consistency than the bottom-level data, which are sparser and sometimes intermittent. H-Pro performs HPO of the lowest-level student model based on test-proxy forecasts obtained from a set of teacher models at higher levels in the hierarchy. The consistency of the teachers' proxy forecasts helps select better student models at the lowest level. We perform extensive empirical studies on multiple datasets to validate the efficacy of the proposed method. H-Pro, together with off-the-shelf forecasting models, outperforms existing state-of-the-art forecasting methods, including the winning models of the M5 point-forecasting competition.
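A conceptual sketch of the selection rule follows: each candidate configuration of the bottom-level student is scored against proxy forecasts from higher-level teachers rather than against a held-out validation window. All names here are illustrative, not H-Pro's actual API.

```python
import numpy as np

def h_pro_select(candidates, fit_student, teacher_proxy, aggregate):
    """Pick the student config whose aggregated bottom-level forecasts
    best match the teachers' proxy forecasts for the test horizon."""
    best_cfg, best_err = None, np.inf
    for cfg in candidates:
        student_forecasts = fit_student(cfg)      # bottom-level forecasts
        rolled_up = aggregate(student_forecasts)  # roll up the hierarchy
        err = np.mean((rolled_up - teacher_proxy) ** 2)
        if err < best_err:
            best_cfg, best_err = cfg, err
    return best_cfg
```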
Every student matters, but it is hard for instructors to observe all students during a course and immediately help those in need. In this paper, we present StuArt, a novel automatic system designed for individualized classroom observation, which enables instructors to follow the learning status of each student. StuArt recognizes five representative student behaviours (hand-raising, standing, sleeping, yawning and smiling) that are highly related to engagement, and tracks their variation trends during the course. To protect student privacy, all variation trends are indexed by seat number, without any personally identifying information. Furthermore, StuArt adopts various user-friendly visualization designs to help instructors quickly understand individual and whole-class learning status. Experimental results on real classroom videos have demonstrated the superiority and robustness of the embedded algorithms. We expect our system to promote the development of large-scale individualized guidance of students.
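A toy sketch of the privacy-preserving trend tracking: per-interval behaviour detections are aggregated by seat number only, with no identity attached. The detector itself and the data layout are assumptions for illustration.

```python
from collections import defaultdict

BEHAVIOURS = {"hand-raising", "standing", "sleeping", "yawning", "smiling"}

class TrendTracker:
    """Accumulate per-seat behaviour counts over time intervals."""
    def __init__(self):
        # seat number -> behaviour -> per-interval count series
        self.trends = defaultdict(lambda: defaultdict(list))

    def update(self, detections):
        """detections: iterable of (seat_number, behaviour) for one interval."""
        counts = defaultdict(lambda: defaultdict(int))
        for seat, behaviour in detections:
            if behaviour in BEHAVIOURS:  # ignore unrelated detections
                counts[seat][behaviour] += 1
        for seat, per_behaviour in counts.items():
            for behaviour, c in per_behaviour.items():
                self.trends[seat][behaviour].append(c)
```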
In recent years, deep learning has led to great progress in the detection of mobile (i.e., movement-capable) objects in urban driving scenes. Supervised approaches typically require the annotation of large training sets; there has thus been great interest in leveraging weakly, semi- or self-supervised methods to avoid this, with much success. Whereas weakly and semi-supervised methods still require some annotation, self-supervised methods have used cues such as motion to remove the need for annotation entirely. However, the complete absence of annotation typically degrades their performance, and ambiguities that arise during motion grouping can inhibit their ability to find accurate object boundaries. In this paper, we propose a new self-supervised mobile object detection approach called SCT. It uses both motion cues and expected object sizes to improve detection performance, and predicts a dense grid of 3D oriented bounding boxes to improve object discovery. We significantly outperform TCR, the state-of-the-art self-supervised mobile object detection method, on the KITTI tracking benchmark, and achieve performance within 30% of the fully supervised PV-RCNN++ method at IoUs <= 0.5.
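One way to picture the dense prediction target is a bird's-eye-view grid in which every cell carries a candidate 3D oriented box; the ranges, resolution and 7-parameter box encoding below are common LiDAR-detection conventions assumed for illustration, not SCT's exact design.

```python
import numpy as np

def dense_box_grid(x_range=(-40.0, 40.0), y_range=(0.0, 70.0), step=0.5):
    """Initialise one candidate box (x, y, z, l, w, h, yaw) per BEV cell."""
    xs = np.arange(*x_range, step)
    ys = np.arange(*y_range, step)
    cx, cy = np.meshgrid(xs, ys, indexing="ij")
    boxes = np.zeros((cx.size, 7), dtype=np.float32)
    boxes[:, 0], boxes[:, 1] = cx.ravel(), cy.ravel()
    # A network head would regress the remaining parameters (z, size, yaw)
    # and a confidence score per cell, from which objects are discovered.
    return boxes
```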
FP8 is a natural progression beyond the 16-bit formats for accelerating deep learning training and inference. In this paper, we propose an 8-bit floating point (FP8) binary interchange format consisting of two encodings: E4M3 (4-bit exponent and 3-bit mantissa) and E5M2 (5-bit exponent and 2-bit mantissa). While E5M2 follows IEEE 754 conventions for the representation of special values, E4M3's dynamic range is extended by not representing infinities and by having only one mantissa bit pattern for NaNs. We demonstrate the efficacy of the FP8 format on a variety of image and language tasks, effectively matching the quality achieved by 16-bit training sessions. Our study covers the main modern neural network architectures (CNNs, RNNs and Transformer-based models), leaving all hyperparameters unchanged from the 16-bit baseline training sessions. Our training experiments include large language models of up to 175B parameters. We also examine FP8 post-training quantization of language models trained with 16-bit formats that resisted fixed-point INT8 quantization.
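Since the E4M3 encoding is fully specified above (bias 7, no infinities, one mantissa bit pattern reserved for NaN), a small decoder can make it concrete; this is a sketch for exposition, not a production implementation.

```python
def decode_e4m3(byte: int) -> float:
    """Decode one E4M3 byte: 1 sign, 4 exponent (bias 7), 3 mantissa bits."""
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 3) & 0x0F
    mant = byte & 0x07
    if exp == 0x0F and mant == 0x07:
        return float("nan")             # the only NaN mantissa bit pattern
    if exp == 0:
        return sign * mant * 2.0 ** -9  # subnormals: (mant/8) * 2**-6
    return sign * (1.0 + mant / 8.0) * 2.0 ** (exp - 7)

assert decode_e4m3(0x7E) == 448.0       # largest finite value: S.1111.110
assert decode_e4m3(0x08) == 2.0 ** -6   # smallest normal value
```

Reclaiming the infinity encodings is what stretches E4M3's largest finite magnitude to 448; an IEEE-style E4M3, with the top exponent fully reserved for special values, would stop at 240.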
Knowledge graphs (KGs) have proven to be a reliable way of structuring data, and they can provide rich contextual information about cultural heritage collections. However, cultural heritage KGs are far from complete: they often lack important attributes such as geographical location, especially for sculptures and for mobile or indoor entities such as paintings. In this paper, we first present a framework for ingesting knowledge about tangible cultural heritage entities from various data sources and their connected multi-hop knowledge. Second, we propose a multi-view learning model for estimating the relative distance between a given pair of cultural heritage entities, based on the entities' geographical and knowledge connections.
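A hedged sketch of the two-view idea: one branch embeds an entity's knowledge-graph context, the other its geographic features, and a small head regresses the relative distance for a pair. Layer sizes and input encodings are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiViewDistance(nn.Module):
    """Estimate relative distance between two cultural heritage entities."""
    def __init__(self, kg_dim: int = 64, geo_dim: int = 8, hidden: int = 32):
        super().__init__()
        self.kg_view = nn.Sequential(nn.Linear(kg_dim, hidden), nn.ReLU())
        self.geo_view = nn.Sequential(nn.Linear(geo_dim, hidden), nn.ReLU())
        self.head = nn.Linear(4 * hidden, 1)  # two entities x two views

    def forward(self, kg_a, geo_a, kg_b, geo_b):
        za = torch.cat([self.kg_view(kg_a), self.geo_view(geo_a)], dim=-1)
        zb = torch.cat([self.kg_view(kg_b), self.geo_view(geo_b)], dim=-1)
        return self.head(torch.cat([za, zb], dim=-1))
```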